Data Science Course

      Xavier Bresson, Sept. 2016

Lecture 12 : Deep Learning 4 - Recurrent Neural Networks

Code 1 : Vanilla Recurrent Neural Networks

Note: The code is A. Karpathy's min-char-rnn, from Fei-Fei Li, A. Karpathy, and J. Johnson's course on Deep Learning
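
For orientation before running it: the heart of min-char-rnn is a vanilla RNN step that updates the hidden state from a one-hot input character and the previous hidden state, then reads out a softmax distribution over the next character (the same three lines appear in the traceback at the end of this notebook). Below is a minimal sketch of that step, using min-char-rnn's parameter names; vocab_size = 65 matches the run below, while hidden_size = 100 is min-char-rnn's default and is assumed here for illustration.

In [ ]:
import numpy as np

# Assumed dimensions: 65 matches the Shakespeare run below, 100 is min-char-rnn's default
vocab_size, hidden_size = 65, 100

# Parameters, initialized to small random values as in min-char-rnn
Wxh = np.random.randn(hidden_size, vocab_size) * 0.01   # input -> hidden
Whh = np.random.randn(hidden_size, hidden_size) * 0.01  # hidden -> hidden
Why = np.random.randn(vocab_size, hidden_size) * 0.01   # hidden -> output
bh = np.zeros((hidden_size, 1))                         # hidden bias
by = np.zeros((vocab_size, 1))                          # output bias

def rnn_step(x, h_prev):
    """One forward step of a vanilla RNN.

    x      : (vocab_size, 1) one-hot encoding of the current character
    h_prev : (hidden_size, 1) previous hidden state
    Returns the new hidden state and the softmax distribution over the next character.
    """
    h = np.tanh(np.dot(Wxh, x) + np.dot(Whh, h_prev) + bh)  # hidden state
    y = np.dot(Why, h) + by             # unnormalized log-probabilities for next chars
    p = np.exp(y) / np.sum(np.exp(y))   # softmax: probabilities for next chars
    return h, p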


In [1]:
# Add the lib/ folder to the module search path
import sys
sys.path.insert(0, 'lib/')
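
The training loop in min-char-rnn (visible in the traceback further down) repeatedly calls lossFun on short chunks of the text to get the loss and the parameter gradients, then applies a per-parameter Adagrad update. A minimal sketch of that update rule, assuming each gradient dparam and memory mem has the same shape as its parameter; this is not the original code:

In [ ]:
import numpy as np

learning_rate = 0.1  # matches the value printed by the run below

def adagrad_update(param, dparam, mem):
    """Adagrad step, applied in place to one parameter array."""
    # mem accumulates this parameter's squared gradients over all iterations
    mem += dparam * dparam
    # each weight gets its own effective step size; 1e-8 avoids division by zero
    param += -learning_rate * dparam / np.sqrt(mem + 1e-8)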

In [7]:
# Train a character-level RNN on the Shakespeare corpus
%run lib/min-char-rnn


data has 1115394 characters, 65 unique.
learning_rate= 0.1
----
 XTiyHbrO,aIf;bCMWsIazANgDR&Cc&oTdjoM
xWp$V!mPa$Hx?Zhhvp&;FakOX,N!Dh&aC
rb$!lLNbRnPtG3rnXtQkLSzpJXJKZYcWC-WTFkraDAWgGenzTKYKiR$jSlLgMQ?oxXFxBaD;m.E3OSEBqLPNvl$KaFBkaNoM,vflV,woRaxVXTQW
Ddokl:'RFRCDwALK 
----
iter 0, loss: 104.359688
----
 sanlnd wincesa, us,, uan sorsysu se e, wawite'is caos las: 'e atu, bu fs fibecin's- shord if, ha Catd
Booblt ziu you seul s har.s band:eNOIUS:
moa:, hus fin.

She wofe.
Oe
Hy lzs.

h maw' co couard ma 
----
iter 2500, loss: 65.675243
----
 e fis un sit hat te nimy and onr boder deblR cuthrp; nosthlipre hob his won, ie ratr hy ie axiec bees ns ho to the wivt se he!
Asd mo yro Catp the deme ce ho le ;.

CIUS:
The stin wo.

FENIUSt
And Set 
----
iter 5000, loss: 56.644078
----
 st;
Rwoop.
Heand lich in theves.

Masco
Hent.
 ourinhyr.

M OERGEB:
Heowaane peekevete; gring he:
And terlt preete sithes.

QUENE
Thet mage beeve bitam!
Lelgis mland ble haveres meocletadeste:
Prathba 
----
iter 7500, loss: 55.170600
----
  An hawlane pralite hame
Theor,
What, ore,
Os lot rithen, getend; andrend and te cath ncanba, Her herryely in maurell the done,
Rholl;
The kery in fisterdise no tes in ithein of Lirter's tises lone yo 
----
iter 10000, loss: 52.768640
----
 oce, tise fenoll our woct the ency fors RICINL:
No fodind htet dood to hardery-f'g, prome,
No cornel, mame aid will.

CORDZARDLFIY:
Shampull;

Theve hichimt'l dyshel qulyrindy:
Repstryron, thith ifus, 
----
iter 12500, loss: 51.847006
----
 rese the cost's my befy;
Is wath senscixst thoudeot; merear not, chorceist urver in ged,
My our fare
Hath of on lowthe deice ro eeting ous hars foo
Bot alicit hesbyour lald, wild your
Co lesees, hripm 
----
iter 15000, loss: 52.128861
---------------------------------------------------------------------------
KeyboardInterrupt                         Traceback (most recent call last)
/Users/xavier/Documents/10_EPFL_2014_now/44_Formation_Continue_DS/16_Website/DEV/v3/code_lecture12/lib/min-char-rnn.py in <module>()
    101 
    102   # forward seq_length characters through the net and fetch gradient
--> 103   loss, dWxh, dWhh, dWhy, dbh, dby, hprev = lossFun(inputs, targets, hprev)
    104   smooth_loss = smooth_loss * 0.999 + loss * 0.001
    105   if n % 2500 == 0: print 'iter %d, loss: %f' % (n, smooth_loss) # print progress

/Users/xavier/Documents/10_EPFL_2014_now/44_Formation_Continue_DS/16_Website/DEV/v3/code_lecture12/lib/min-char-rnn.py in lossFun(inputs, targets, hprev)
     41     xs[t] = np.zeros((vocab_size,1)) # encode in 1-of-k representation
     42     xs[t][inputs[t]] = 1
---> 43     hs[t] = np.tanh(np.dot(Wxh, xs[t]) + np.dot(Whh, hs[t-1]) + bh) # hidden state
     44     ys[t] = np.dot(Why, hs[t]) + by # unnormalized log probabilities for next chars
     45     ps[t] = np.exp(ys[t]) / np.sum(np.exp(ys[t])) # probabilities for next chars

KeyboardInterrupt: 
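
The run above was stopped by hand, hence the KeyboardInterrupt; training would otherwise continue indefinitely. Two details of the log are worth noting. First, the printed loss is an exponential moving average (smooth_loss = smooth_loss * 0.999 + loss * 0.001, visible in the traceback), so it changes slowly. Second, the initial value 104.36 is exactly the cost of a uniform guess over the 65 characters on one training chunk: 25 * ln(65) ≈ 104.36, where 25 is min-char-rnn's default seq_length.

The text between the ---- markers is generated by sampling: starting from a seed character, the model's softmax output is sampled to pick the next character, which is then fed back in as the next input. A minimal sketch of that loop, assuming the rnn_step helper and parameters from the sketch near the top of this notebook and the char_to_ix / ix_to_char lookup tables that min-char-rnn builds from the corpus:

In [ ]:
def sample(h, seed_ix, n):
    """Sample a sequence of n character indices from the model,
    starting from hidden state h and seed character index seed_ix."""
    x = np.zeros((vocab_size, 1))
    x[seed_ix] = 1
    ixes = []
    for _ in range(n):
        h, p = rnn_step(x, h)                           # forward one step
        ix = np.random.choice(vocab_size, p=p.ravel())  # draw next char from softmax
        x = np.zeros((vocab_size, 1))                   # feed the sample back in
        x[ix] = 1
        ixes.append(ix)
    return ixes

# Usage, e.g. 200 characters seeded with 'a' and a zero hidden state:
# ''.join(ix_to_char[ix] for ix in sample(np.zeros((hidden_size, 1)), char_to_ix['a'], 200))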
